11 research outputs found

    A Novel and Automated Approach to Classify Radiation Induced Lung Tissue Damage on CT Scans

    Radiation-induced lung damage (RILD) is a common side effect of radiotherapy (RT). The ability to automatically segment, classify, and quantify different types of lung parenchymal change is essential to uncover underlying patterns of RILD and their evolution over time. A RILD-dedicated tissue classification system was developed to describe lung parenchymal tissue changes on a voxel-wise level. The classification system was automated for segmentation of five lung tissue classes on computed tomography (CT) scans, describing incrementally increasing tissue density from normal lung (Class 1) to consolidation (Class 5). For ground truth data generation, we employed a two-stage data annotation approach, akin to active learning. Manual segmentations were used to train a stage-one auto-segmentation method. These results were manually refined and used to train the stage-two auto-segmentation algorithm, an ensemble of six 2D U-Nets using different loss functions and numbers of input channels. The development dataset consisted of 40 cases, each with pre-radiotherapy and 3-, 6-, 12- and 24-month follow-up CT scans (n = 200 CT scans). The method was assessed on a hold-out test dataset of 6 cases (n = 30 CT scans). The global Dice similarity coefficients (DSC) achieved for each tissue class were: Class 1, 99% and 98%; Class 2, 71% and 44%; Class 3, 56% and 26%; Class 4, 79% and 47%; and Class 5, 96% and 92%, for the development and test subsets, respectively. The lowest values for the test subsets were caused by imaging artefacts, or reflected subgroups that occurred infrequently and with smaller overall parenchymal volumes. For qualitative evaluation, manual and auto-segmentations of the test dataset were presented to a blinded, independent radiologist, who rated them as 'acceptable', 'minor disagreement' or 'major disagreement'.
The auto-segmentation ratings were similar to those of the manual segmentations, with approximately 90% of cases rated as acceptable in both. The proposed framework for auto-segmentation of different lung tissue classes produces acceptable results in the majority of cases and has the potential to facilitate future large studies of RILD.
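The per-class overlap metric reported above can be computed as follows. This is a minimal sketch of the standard Dice similarity coefficient, not the authors' code; the toy label maps are invented for illustration.

```python
import numpy as np

def dice_score(pred, truth, label):
    """Dice similarity coefficient for one tissue class between two
    integer label maps of identical shape."""
    p = (pred == label)
    t = (truth == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # class absent from both maps: perfect agreement by convention
    return 2.0 * np.logical_and(p, t).sum() / denom

# toy 1D "label maps" with tissue classes 1..5
pred  = np.array([1, 1, 2, 2, 5])
truth = np.array([1, 2, 2, 5, 5])
print(round(dice_score(pred, truth, 2), 2))  # 0.5
```

A global DSC, as reported in the abstract, would pool voxels over all scans in the subset before applying the same formula.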

    Developing a framework for CBCT-to-CT synthesis in paediatric abdominal radiotherapy

    We proposed a CBCT-to-CT synthesis framework tailored for paediatric abdominal patients. Our approach was based on the cycle-consistent generative adversarial network (cycleGAN), modified to preserve structural consistency. To adjust for differences in field-of-view and body size across patient groups, our training data were spatially co-registered to a common field-of-view and normalised to a fixed size. The proposed framework showed improvements in generating synthetic CTs from CBCTs compared to the original implementation of cycleGAN without field-of-view adjustments and a structural consistency constraint.
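One common way to encourage structural consistency between a source CBCT and its synthetic CT is to penalise differences in spatial gradients, which preserves anatomical edges while leaving intensities free to change between domains. The sketch below illustrates that idea only; the published framework's exact loss formulation may differ.

```python
import numpy as np

def structure_consistency_loss(source, synthetic):
    """Hedged sketch: L1 difference between the spatial gradients of the
    source CBCT and the synthetic CT. Edges (anatomy) are penalised for
    moving, while a global intensity shift costs nothing."""
    gx_s, gy_s = np.gradient(source.astype(float))
    gx_t, gy_t = np.gradient(synthetic.astype(float))
    return float(np.mean(np.abs(gx_s - gx_t) + np.abs(gy_s - gy_t)))

img = np.tile(np.arange(8.0), (8, 1))            # simple intensity ramp
print(structure_consistency_loss(img, img))       # 0.0 — identical structure
print(structure_consistency_loss(img, img + 100)) # 0.0 — offset keeps the same edges
```

In a cycleGAN training loop such a term would be added, with a weighting factor, to the adversarial and cycle-consistency losses.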

    Deep learning based synthetic CT from cone beam CT generation for abdominal paediatric radiotherapy

    Objective: Adaptive radiotherapy workflows require images with the quality of computed tomography (CT) for re-calculation and re-optimisation of radiation doses. In this work we aim to improve the quality of cone beam CT (CBCT) images for dose calculation using deep learning. / Approach: We propose a novel framework for CBCT-to-CT synthesis using cycle-consistent Generative Adversarial Networks (cycleGANs). The framework was tailored for paediatric abdominal patients, a challenging application due to the inter-fractional variability in bowel filling and smaller patient numbers. We introduced the concept of global-residuals-only learning to the networks and modified the cycleGAN loss function to explicitly promote structural consistency between source and synthetic images. Finally, to compensate for the anatomical variability and address the difficulties in collecting large datasets in the paediatric population, we applied a smart 2D slice selection based on the common field-of-view across the dataset (abdomen). This acted as a weakly paired data approach that allowed us to take advantage of scans from patients treated for a variety of malignancies (thoracic, abdominal and pelvic) for training purposes. We first optimised the proposed framework and benchmarked its performance on a development dataset. Later, a comprehensive quantitative evaluation was performed on an unseen dataset, which included calculating global image similarity metrics, segmentation-based measures and proton therapy-specific metrics. / Main results: We found improved performance, compared to a baseline implementation, on image similarity metrics such as the Mean Absolute Error calculated for a matched virtual CT (55.0±16.6 proposed vs 58.9±16.8 baseline). There was also a higher level of structural agreement for gastrointestinal gas between source and synthetic images, measured through Dice similarity overlap (0.872±0.053 proposed vs 0.846±0.052 baseline).
Differences found in water-equivalent thickness metrics were also smaller for our method (3.3±2.4% proposed vs 3.7±2.8% baseline). / Significance: Our findings indicate that our innovations to the cycleGAN framework improved the quality and structural consistency of the synthetic CTs generated.
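The Mean Absolute Error reported above compares a synthetic CT against a matched virtual CT, typically restricted to voxels inside the patient. The sketch below shows that evaluation pattern on toy arrays; the mask heuristic and values are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def masked_mae(synthetic_ct, virtual_ct, body_mask):
    """Mean absolute error between a synthetic CT and a matched virtual CT,
    restricted to a boolean body mask."""
    diff = np.abs(synthetic_ct[body_mask] - virtual_ct[body_mask])
    return float(diff.mean())

syn = np.array([[  0.0,  50.0], [100.0, -1000.0]])
ref = np.array([[ 10.0,  40.0], [ 90.0, -1000.0]])
mask = ref > -500.0                 # crude "inside the body" threshold (assumption)
print(masked_mae(syn, ref, mask))   # 10.0
```

Segmentation-based measures (such as the gastrointestinal-gas Dice overlap) and water-equivalent thickness would be computed on top of the same matched image pair.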

    Quantitative Analysis of Radiation-Associated Parenchymal Lung Change

    We present a novel classification system of the parenchymal features of radiation-induced lung damage (RILD). We developed a deep learning network to automate the delineation of five classes of parenchymal textures. We quantify the volumetric change in classes after radiotherapy (RT) in order to allow detailed, quantitative descriptions of the evolution of lung parenchyma up to 24 months after RT, and correlate these with radiotherapy dose and respiratory outcomes. Diagnostic CTs were available pre-RT, and at 3, 6, 12 and 24 months post-RT, for 46 subjects enrolled in a clinical trial of chemoradiotherapy for non-small cell lung cancer. All 230 CT scans were segmented using our network. The five parenchymal classes showed distinct temporal patterns. Moderate correlation was seen between change in tissue class volume and clinical and dosimetric parameters, e.g., the Pearson correlation coefficient was ≤0.49 between V30 and change in Class 2, and was 0.39 between change in Class 1 and decline in FVC. The effect of the local dose on tissue class revealed a strong dose-dependent relationship. Respiratory function measured by spirometry and MRC dyspnoea scores after radiotherapy correlated with the measured radiological RILD. We demonstrate the potential of using our approach to analyse and understand the morphological and functional evolution of RILD in greater detail than previously possible.
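The dose-response correlations above are plain Pearson coefficients between a per-subject dose metric and a per-subject change in class volume. A minimal sketch, with invented toy values (the real V30 and volume-change data are not reproduced here):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

v30     = [10.0, 20.0, 30.0, 40.0]   # hypothetical dose metric per subject
dclass2 = [1.0, 2.1, 2.9, 4.2]       # hypothetical change in Class 2 volume
print(round(pearson_r(v30, dclass2), 3))  # close to 1 for this toy data
```

In the study this statistic would be computed once per (dose metric, tissue class) pair across the 46 subjects.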

    Patch-based lung ventilation estimation using multi-layer supervoxels

    Patch-based approaches have received substantial attention over recent years in medical imaging. One of their potential applications may be to provide more anatomically consistent ventilation maps estimated on dynamic lung CT. An assessment of regional lung function may act as a guide for radiotherapy, ensuring a more accurate treatment plan. This, in turn, could spare well-functioning parts of the lungs. We present a novel method for lung ventilation estimation from dynamic lung CT imaging, combining a supervoxel-based image representation with deformations estimated during deformable image registration performed between peak breathing phases. For this, we propose a method that tracks changes in the intensity of previously extracted supervoxels. To evaluate the method we calculate the correlation of the estimated ventilation maps with static ventilation images acquired from hyperpolarized Xenon129 MRI. We also investigate the influence of different image registration methods used to estimate deformations between the peak breathing phases in the dynamic CT imaging. We show that our method compares favorably with other ventilation estimation methods commonly used in the field, independently of the image registration method applied to dynamic CT. Due to its patch-based approach, our method may be physiologically more consistent with lung anatomy than previous methods relying on voxel-wise relationships. In our method the ventilation is estimated for supervoxels, which tend to group spatially close voxels with similar intensity values. The proposed method was evaluated on a dataset consisting of three lung cancer patients undergoing radiotherapy treatment, and resulted in an average correlation of 0.485 with XeMRI ventilation images, compared with 0.393 for the intensity-based approach, 0.231 for the Jacobian-based method and 0.386 for the Hounsfield units averaging method.
Within the limitations of the small number of cases analyzed, the results suggest that the presented technique may be advantageous for CT-based ventilation estimation. The higher correlation values of the proposed method demonstrate its potential to more closely reflect lung physiology.
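The core idea of tracking per-supervoxel intensity change between registered peak-exhale and peak-inhale images can be sketched as below. This is an illustrative density-change surrogate (akin to HU-based ventilation estimates), not the authors' exact formula; in a real pipeline the labels would come from a supervoxel algorithm (e.g. SLIC) and the inhale intensities would be warped by deformable registration first.

```python
import numpy as np

def supervoxel_ventilation(exhale_hu, inhale_hu, labels):
    """Hedged sketch: per-supervoxel ventilation surrogate from the change in
    mean CT density between registered peak-exhale and peak-inhale images.
    `labels` assigns each voxel to a supervoxel; assumed given here."""
    out = {}
    for lab in np.unique(labels):
        m = labels == lab
        ex, ins = exhale_hu[m].mean(), inhale_hu[m].mean()
        # air-fraction change per unit inhale density (illustrative surrogate)
        out[int(lab)] = float((ex - ins) / (1000.0 + ins))
    return out

labels = np.array([0, 0, 1, 1])
exhale = np.array([-800.0, -800.0, -700.0, -700.0])
inhale = np.array([-850.0, -850.0, -700.0, -700.0])
print(supervoxel_ventilation(exhale, inhale, labels))
```

Supervoxel 0 darkens on inhalation (air influx) and receives a positive ventilation value; supervoxel 1 is unchanged and receives zero.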

    Prognostic Imaging Biomarker Discovery in Survival Analysis for Idiopathic Pulmonary Fibrosis

    No full text
    25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 18-22 September 2022, Singapore.
    Imaging biomarkers derived from medical images play an important role in diagnosis, prognosis, and therapy response assessment. Developing prognostic imaging biomarkers which can achieve reliable survival prediction is essential for prognostication across various diseases and imaging modalities. In this work, we propose a method for discovering patch-level imaging patterns which we then use to predict mortality risk and identify prognostic biomarkers. Specifically, a contrastive learning model is first trained on patches to learn patch representations, followed by a clustering method to group similar underlying imaging patterns. The entire medical image can thus be represented by a long sequence of patch representations and their cluster assignments. Then a memory-efficient clustering Vision Transformer is proposed to aggregate all the patches to predict mortality risk of patients and identify high-risk patterns. To demonstrate the effectiveness and generalizability of our model, we test the survival prediction performance of our method on two sets of patients with idiopathic pulmonary fibrosis (IPF), a chronic, progressive, and life-threatening interstitial pneumonia of unknown etiology. Moreover, by comparing the high-risk imaging patterns extracted by our model with existing imaging patterns utilised in clinical practice, we can identify a novel biomarker that may help clinicians improve risk stratification of IPF patients.
AZ is supported by the CSC-UCL Joint Research Scholarship. DCA is supported by UK EPSRC grants M020533, R006032, R014019 and V034537, and Wellcome Trust grant UNS113739. JJ is supported by a Wellcome Trust Clinical Research Career Development Fellowship (209553/Z/17/Z). DCA and JJ are also supported by the NIHR UCLH Biomedical Research Centre, UK.
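The clustering stage described above maps each patch embedding (produced by the contrastive encoder) to its nearest cluster centroid, turning a whole scan into a sequence of cluster IDs for the transformer to aggregate. A minimal sketch of that assignment step, with invented toy embeddings and centroids:

```python
import numpy as np

def assign_clusters(patch_embeddings, centroids):
    """Hedged sketch of the clustering stage: each patch embedding is mapped
    to the index of its nearest centroid (Euclidean distance), so a scan
    becomes a sequence of cluster IDs."""
    d = np.linalg.norm(
        patch_embeddings[:, None, :] - centroids[None, :, :], axis=-1
    )
    return d.argmin(axis=1)

emb = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1]])  # toy patch embeddings
cen = np.array([[0.0, 0.0], [5.0, 5.0]])              # toy centroids
print(assign_clusters(emb, cen).tolist())             # [0, 0, 1]
```

In the full method the centroids would come from clustering the learned representations, and a high-risk cluster flagged by the transformer's attention could be inspected as a candidate imaging biomarker.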
